AI arms race: Cybersecurity defenders in the age of evolving threats
As web application cyberattacks surge, defenders are on the frontline of an ever-evolving battlefield. With adversaries leveraging artificial intelligence (AI) to sharpen their assaults, defenders face unprecedented challenges. However, AI isn't just empowering attackers; it's also emerging as a crucial ally for defenders. By combining AI's capabilities with strong security training, organizations can identify and neutralize threats.
The rise in web application attacks signifies a persistent shift rather than a passing trend. A recent Global Threat Analysis Report found that the total number of malicious web application and API transactions rose by 171% in 2023, driven primarily by layer 7 encrypted web application attacks. Attackers' primary targets are misconfigurations.
Barracuda's Application Security system found that in December 2023, 30% of all attacks against web applications targeted security misconfigurations, such as coding and implementation errors, while 21% involved SQL injection. Other top attack tactics included cross-site scripting (XSS) and cross-site request forgery (CSRF), which allow attackers to steal data or trick victims into performing actions they did not intend. Entry-level bug bounty hunters now typically use cross-site scripting to break into networks.
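SQL injection of the kind measured above exploits queries built by string concatenation, where attacker input can rewrite the query itself; parameterized queries close that hole. A minimal sketch using Python's built-in sqlite3 module (the table schema and payload are illustrative, not from any cited incident):

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker input is concatenated into the SQL string,
    # so input like "x' OR '1'='1" changes the query's meaning.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input purely as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)",
                     [("alice",), ("bob",)])
    payload = "x' OR '1'='1"
    print(len(find_user_unsafe(conn, payload)))  # 2 rows: injection succeeded
    print(len(find_user_safe(conn, payload)))    # 0 rows: input treated as data
```

The same principle applies to any database driver: keep user input out of the query text and pass it through the driver's placeholder mechanism.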
Barracuda found that two primary factors contribute to the spike in web application attacks. First, a significant number of web applications harbor vulnerabilities or misconfigurations, leaving them susceptible to exploitation. Second, these applications often store highly sensitive information, including personal and financial data, making them prime targets for attackers seeking direct access to valuable data.
Armed with sophisticated AI-driven tools, attackers are refining their methods to bypass traditional defense measures. Injection attacks, cross-site scripting and an array of other tactics keep defenders on their toes, requiring swift and proactive responses. In this dynamic environment, AI not only enhances response capabilities but also reshapes the very narrative of cybersecurity.
AI plays a dual role: it serves as both a weapon for attackers and a shield for defenders. Attackers use AI to launch more targeted and efficient attacks, while defenders race against time to reinforce their defenses.
Most recently, attackers have been using generative AI to automate content creation, notably in phishing attacks, crafting convincing phishing emails that resemble legitimate messages. Attackers can now produce personalized and contextually relevant messages that improve their chances of success. AI facilitates the spoofing of authentic email addresses, the analysis of publicly available data for tailored attacks, and the replication of the communication patterns of familiar contacts to dupe recipients. Additionally, AI-generated content often lacks the grammatical errors typically associated with fraudulent content, making such attacks harder for traditional security measures to detect and prevent.
WormGPT and EvilGPT are two AI-powered tools that attackers use to carry out zero-day attacks, generating malicious attachments and dynamic malware payloads. Their goal is to create adaptive malware capable of modifying its behavior to evade detection.
Moreover, AI-powered botnets pose a threat with their potential for devastating distributed denial-of-service (DDoS) attacks. By incorporating AI into attack tooling, adversaries can significantly amplify their impact while reducing the need for extensive human involvement and accelerating breach rates. Attackers also use AI to gain access to personal credentials, deploy deepfake content such as impersonation videos used for extortion, and leverage content localization to broaden their attack base.
However, AI doesn't just empower adversaries; it also gives defenders a weapon of their own. Because AI raises attackers to a high level of sophistication, defenders must match that sophistication to fortify their defenses and counteract these threats. Barracuda's research found that more organizations are doing just that.
Roughly half (46%) of organizations surveyed say they are already using AI in cybersecurity, and 43% are planning to implement AI in the future. They’re utilizing AI to analyze vast datasets to pinpoint real threats and correlate signals across various attack surfaces while deploying natural language-based query builders to extract pertinent data and deliver targeted, personalized security awareness training.
AI-driven machine learning algorithms combine threat detection and threat intelligence to sift through datasets and detect irregularities indicative of security breaches, such as unusual network traffic or anomalous user behavior. Behavioral analytics monitor activity to flag suspicious patterns, helping surface insider threats and abnormal access.
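The statistical baselining such detectors perform can be sketched in miniature. The toy example below (not any vendor's actual algorithm) flags traffic samples that deviate sharply from the mean, a simplified stand-in for ML-based anomaly detection over metrics like requests per minute:

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=2.5):
    """Flag values more than `threshold` standard deviations from the mean.

    A toy stand-in for the baselining that ML-based detectors perform;
    real systems tune the threshold and use far richer features.
    """
    if len(samples) < 2:
        return []
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [x for x in samples if abs(x - mu) / sigma > threshold]

# Requests per minute from one host: steady baseline, then a burst.
traffic = [120, 118, 125, 121, 119, 122, 117, 123, 950]
print(flag_anomalies(traffic))  # → [950]
```

Production detectors replace this single z-score with learned models over many signals at once, but the core idea, a baseline plus a deviation test, is the same.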
Additionally, while attackers target their victims with AI-generated phishing attacks, cyber experts can use AI to stay a step ahead. Organizations can now use AI to identify phishing patterns and signatures, scanning for irregular sending behavior, deviations or unusual email content using natural language processing. AI also excels at responding to security threats in real time: applications such as automated incident identification, orchestration and playbook automation improve threat detection and accelerate response.
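A crude sense of how pattern-based phishing scanning works can be given with a rule-based scorer. The signal patterns and weights below are hypothetical; production systems learn such features from labeled data rather than hard-coding them:

```python
import re

# Hypothetical signals and weights, for illustration only: real filters
# learn these from labeled mail corpora rather than fixing them by hand.
SIGNALS = [
    (r"urgent|immediately|within 24 hours", 2),   # pressure tactics
    (r"verify your (account|password)", 3),       # credential lure
    (r"https?://\d{1,3}(\.\d{1,3}){3}", 3),       # link to a raw IP address
    (r"dear (customer|user)", 1),                 # generic greeting
]

def phishing_score(text):
    """Sum the weights of all signals found in the message text."""
    lowered = text.lower()
    return sum(w for pattern, w in SIGNALS if re.search(pattern, lowered))

email = ("Dear customer, verify your account immediately at "
         "http://192.0.2.7/login or it will be suspended.")
print(phishing_score(email))  # → 9
```

A threshold on the score would then route suspicious mail to quarantine; the NLP-based systems the article describes play the same role with learned, context-aware features instead of fixed regexes.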
It's important to note that implementing AI-powered security solutions does not minimize the role humans play in strengthening their organizations' security postures. Technology serves people, not the other way around. A joint research study found that human error was a contributing factor in 88% of security breaches. That's why it's important for organizations to leverage AI to implement smart security training across the board, allowing organizations and users to better understand the technology and confidently identify threats efficiently and effectively.
This shift from reactive to proactive defense marks a crucial turning point in the cybersecurity paradigm, and it's important to implement robust security solutions to safeguard against the ever-improving AI techniques attackers employ. While AI is a powerful tool for both sides, defenders can integrate it into their strategies to significantly bolster resilience and adaptability in the face of relentless cyber adversaries.
Cybersecurity companies find themselves at the forefront of an evolving battleground. Adversaries are using AI to refine their assaults, creating new challenges for defenders. Yet AI also emerges as a crucial ally, empowering defenders with machine learning and predictive analytics to preemptively identify and neutralize threats and reshape the cybersecurity narrative. Defenders must use AI at least as effectively as attackers do, applying it to efficient threat detection, real-time incident response and comprehensive security training. By harnessing AI's capabilities, organizations can strengthen their resilience and adaptability, forging a formidable defense against relentless cyber adversaries.